Position embedding (PE) is critical for Vision Transformers (VTs) due to the permutation invariance of the self-attention operation. By analyzing the input and output of each encoder layer in VTs using reparameterization and visualization, we find that the default PE joining method (simply adding the PE and patch embedding together) applies the same affine transformation to the token embedding and the PE, which limits the expressiveness of the PE and hence constrains the performance of VTs. To overcome this limitation, we propose a simple, effective, and robust method. Specifically, we provide two independent layer normalizations for the token embeddings and the PE in each layer, and add them together as the input of each layer's Multi-Head Self-Attention module. Since the method allows the model to adaptively adjust the PE information for different layers, we name it Layer-adaptive Position Embedding, abbreviated as LaPE. Extensive experiments demonstrate that LaPE improves various VTs with different types of PE and makes VTs robust to the PE type. For example, LaPE improves accuracy by 0.94% for ViT-Lite on CIFAR-10, 0.98% for CCT on CIFAR-100, and 1.72% for DeiT on ImageNet-1K, which is remarkable considering the negligible extra parameters, memory, and computational cost introduced by LaPE. The code is publicly available at https://github.com/Ingrid725/LaPE.
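A minimal PyTorch sketch of the joining scheme described above, assuming a standard pre-norm encoder block: the class and variable names (LaPEBlock, token_norm, pe_norm) are illustrative rather than the repository's actual API, and the attention/MLP sub-modules are generic placeholders.

```python
import torch
import torch.nn as nn

class LaPEBlock(nn.Module):
    """One encoder block where the position embedding gets its own LayerNorm per layer."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.token_norm = nn.LayerNorm(dim)   # LN for token embeddings
        self.pe_norm = nn.LayerNorm(dim)      # independent LN for the position embedding
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp_norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, pe):
        # LaPE idea: normalize tokens and PE separately, then add them right before
        # self-attention, instead of adding PE once at the input and normalizing the sum.
        q = self.token_norm(x) + self.pe_norm(pe)
        x = x + self.attn(q, q, q, need_weights=False)[0]
        x = x + self.mlp(self.mlp_norm(x))
        return x
```

In this sketch the same PE tensor is fed to every block, but each block rescales it with its own LayerNorm, which is what makes the PE layer-adaptive.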
Text-based Visual Question Answering (TextVQA) aims to produce correct answers to questions about images that contain multiple scene texts. In most cases, text is naturally attached to object surfaces, so spatial reasoning between text and objects is crucial in TextVQA. However, existing methods are limited to the 2D spatial information learned from the input images and rely on transformer-based architectures to reason implicitly during fusion. Under this setting, such 2D spatial reasoning methods cannot distinguish the fine-grained spatial relations between visual objects and scene text on the same image plane, which harms the interpretability and performance of TextVQA models. In this paper, we introduce 3D geometric information into a human-like spatial reasoning process to progressively capture the contextual knowledge of key objects. To enhance the model's understanding of 3D spatial relationships, specifically, (i) we propose a relation prediction module to accurately locate the attention region of key objects; (ii) we design a depth-aware attention calibration module to calibrate the attention of OCR tokens according to key objects. Extensive experiments show that our method achieves state-of-the-art performance on the TextVQA and ST-VQA datasets. More encouragingly, our model outperforms others by clear margins of 5.7% and 12.1% on questions involving spatial reasoning in the TextVQA and ST-VQA valid splits, respectively. In addition, we verify the generalizability of our model on the text-based image captioning task.
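A loosely hedged sketch of what a depth-aware attention calibration could look like, only to illustrate the idea of biasing OCR-to-object attention by depth proximity; it is not the paper's actual module, and all names, shapes, and the additive bias form are assumptions.

```python
import torch

def depth_aware_attention(q, k, v, depth_q, depth_k, alpha=1.0):
    """q: (B, Nq, D) OCR-token queries; k, v: (B, Nk, D) object keys/values.
    depth_q: (B, Nq), depth_k: (B, Nk) estimated depths of the corresponding regions."""
    d = q.shape[-1]
    logits = q @ k.transpose(1, 2) / d ** 0.5                      # (B, Nq, Nk) similarity
    depth_gap = (depth_q[:, :, None] - depth_k[:, None, :]).abs()  # (B, Nq, Nk) depth distance
    logits = logits - alpha * depth_gap   # down-weight pairs that are far apart in depth
    attn = torch.softmax(logits, dim=-1)
    return attn @ v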
Although Vision Transformer (VT) architectures are becoming increasingly popular in computer vision, pure VT models perform poorly on tiny datasets. To address this problem, this paper proposes locality guidance for improving the performance of VTs on tiny datasets. We first analyze that, due to the high flexibility and intrinsic globality of the self-attention mechanism in VTs, it is difficult to learn local information with limited data, yet local information is essential for understanding images. To facilitate local information, we realize locality guidance for VTs by imitating the features of an already trained convolutional neural network (CNN), inspired by the built-in local-to-global hierarchy of CNNs. Under our dual-task learning paradigm, the local guidance provided by a lightweight CNN trained on low-resolution images is sufficient to accelerate convergence and largely improve the performance of VTs. Our locality guidance approach is therefore very simple and effective, and can serve as a basic performance enhancement method for VTs on tiny datasets. Extensive experiments demonstrate that our method can significantly improve VTs when training from scratch on tiny datasets, and is compatible with different kinds of VTs and datasets. For example, our proposed method can boost the performance of various VTs on tiny datasets (e.g., 13.07% for DeiT, 8.98% for T2T, and 7.85% for PVT), and enhances the stronger baseline PVTv2 by 1.86% to 79.30%, showing the potential of VTs on tiny datasets. The code is available at https://github.com/lkhl/tiny-transformers.
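A hedged sketch of the dual-task idea described above: the VT is trained with the usual classification loss plus a feature-imitation term against a lightweight, frozen CNN pre-trained on low-resolution images. The function name, the learnable projection layer, and the spatial alignment choice are assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def locality_guidance_loss(vt_tokens, cnn_feat, proj):
    """Match VT patch tokens to the frozen CNN's feature maps.

    vt_tokens: (B, N, C_vt) patch tokens from the Vision Transformer
    cnn_feat:  (B, C_cnn, H, W) features from the frozen, pre-trained CNN
    proj:      a learnable nn.Linear mapping C_vt -> C_cnn
    """
    B, N, _ = vt_tokens.shape
    h = w = int(N ** 0.5)                                 # assume a square patch grid
    vt_map = proj(vt_tokens).transpose(1, 2).reshape(B, -1, h, w)
    cnn_feat = F.adaptive_avg_pool2d(cnn_feat, (h, w))    # align spatial sizes
    return F.mse_loss(vt_map, cnn_feat.detach())

# Dual-task objective (sketch):
#   total_loss = cross_entropy(logits, labels) + lam * locality_guidance_loss(tokens, cnn_feat, proj)
```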
In this paper, we show that differences in the Euclidean norms of sample vectors can contribute to semantic divergence and even confusion after the spatial translation and divisive normalization. To address this issue, we propose an intuitive but effective method to equalize the Euclidean norms of sample vectors. Specifically, we L2-normalize each sample vector before batch normalization, so that all sample vectors have the same magnitude. Since the proposed method combines L2 normalization and batch normalization, we name it L2BN. L2BN can strengthen the compactness of intra-class features and enlarge the discrepancy of inter-class features. In addition, it helps the gradients converge to a stable scale. L2BN is easy to implement and can exert its effect without any additional parameters or hyper-parameters. Therefore, it can be used as a basic normalization method for neural networks. We evaluate the effectiveness of L2BN through extensive experiments with various models on image classification and acoustic scene classification tasks. The experimental results demonstrate that L2BN can boost the generalization ability of various neural network models and achieve considerable performance improvements.
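A minimal sketch of the L2BN idea as described above, assuming flat (B, C) features: each sample's feature vector is L2-normalized so all samples share the same magnitude, then standard BatchNorm is applied. The module name and the restriction to 1-D features are assumptions; convolutional features would need an analogous per-sample normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class L2BN1d(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.bn = nn.BatchNorm1d(num_features)  # no parameters beyond those of BN itself

    def forward(self, x):
        # x: (B, C). Equalize the Euclidean norm of every sample vector, then batch-normalize.
        x = F.normalize(x, p=2, dim=1)
        return self.bn(x)
```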
To obtain high-quality RAW images for the downstream image signal processing (ISP) pipeline, in this paper we propose an efficient locally multiplicative Transformer, called ELMformer, for RAW image restoration. ELMformer contains two core designs tailored to the RAW property of being single-channel. The first design is a Bi-directional Fusion Projection (BFP) module, in which we consider both the color characteristics of RAW images and their single-channel spatial structure. The second is a Locally Multiplicative Self-Attention (L-MSA) scheme that we propose to effectively deliver information from the local space to the relevant parts. ELMformer can efficiently reduce computational consumption and performs well on RAW image restoration tasks. With these two core designs, ELMformer achieves the highest performance and the lowest computational cost on RAW denoising and RAW deblurring benchmarks compared with state-of-the-art methods. Extensive experiments demonstrate the superiority and generalization ability of ELMformer. On the SIDD benchmark, our method achieves better denoising performance than ISP-based methods that require large amounts of additional sRGB training images. The code is released at https://github.com/leonmakise/elmformer.
Due to its simplicity and efficiency, the mean-shift algorithm has been widely used in tracking tasks. However, the traditional mean-shift algorithm needs to label the initial region of the target, which reduces the applicability of the algorithm. In addition, it is only suitable for scenes where the overlap rate between the target region and the candidate region is high. Consequently, when the target moves fast, or the target scale changes, its shape deforms, or the target is occluded, the tracking performance deteriorates. In this paper, we address these challenges by developing a tracking method that combines a background model with graded features of color names under the mean-shift framework. This method significantly improves performance in the above scenarios. Moreover, it facilitates a balance between detection accuracy and detection speed. Experimental results demonstrate the validity of the proposed method.
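A compact sketch of a single mean-shift tracking step over a back-projection (weight) map, illustrating only the generic framework the method above builds on; the background-weighted histogram and graded color-name features from the paper are not reproduced, and the helper names are assumptions.

```python
import numpy as np

def mean_shift_step(weight_map, box, eps=1.0, max_iter=20):
    """Shift `box` (x, y, w, h) toward the weighted centroid of `weight_map` until convergence."""
    x, y, w, h = box
    for _ in range(max_iter):
        roi = weight_map[int(y):int(y + h), int(x):int(x + w)]
        if roi.sum() < 1e-6:
            break
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = (xs * roi).sum() / roi.sum()        # weighted centroid inside the window
        cy = (ys * roi).sum() / roi.sum()
        dx, dy = cx - w / 2.0, cy - h / 2.0      # shift toward the local mode
        x, y = x + dx, y + dy
        if dx * dx + dy * dy < eps:              # converged
            break
    return (x, y, w, h)
```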
Existing one-stage multi-person pose regression methods typically leverage the instance score (i.e., the confidence of instance localization) to indicate pose quality when selecting pose candidates. We argue that there are two gaps in the existing paradigm: 1) the instance score is not well correlated with the pose regression quality; 2) the instance feature representation used to predict the instance score does not explicitly encode structural pose information, so it cannot predict a reasonable score that reflects pose regression quality. To address these problems, we propose to learn pose regression quality-aware representations. Specifically, for the first gap, instead of using the previous instance confidence labels (e.g., discrete {1, 0} or Gaussian representations) to denote the position and confidence of human instances, we first introduce a Consistent Instance Representation (CIR) that unifies the pose regression quality score of the instance and the confidence of the background into a pixel-wise score map, to calibrate the inconsistency between the instance score and the pose regression quality. To fill the second gap, we further propose a Query Encoding Module (QEM) that includes a Keypoint Query Encoding (KQE) to encode the positional and semantic information of each keypoint, and a Pose Query Encoding (PQE) that explicitly encodes the predicted structural pose information, in order to better fit the Consistent Instance Representation (CIR). With the proposed components, we significantly alleviate the above gaps. Our method outperforms previous one-stage regression-based and even bottom-up methods, achieving a state-of-the-art result of 71.7 AP on the MS COCO test-dev set.
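A heavily hedged sketch of the kind of pixel-wise target the CIR above describes: foreground positions carry a pose-quality score (e.g., an OKS-style similarity between regressed and ground-truth poses) while background positions stay at zero. The function names, the per-center assignment, and the quality function are assumptions, not the paper's exact construction.

```python
import torch

def build_cir_target(pred_poses, gt_poses, centers, hw, quality_fn):
    """Build a pixel-wise CIR-style score map of shape (H, W).

    pred_poses / gt_poses: (K, J, 2) regressed and ground-truth keypoints per instance
    centers: (K, 2) integer (x, y) center pixel of each instance
    quality_fn: callable returning a pose-quality score in [0, 1]
    """
    H, W = hw
    target = torch.zeros(H, W)                 # background confidence stays 0
    for k in range(len(centers)):
        x, y = centers[k]
        target[y, x] = quality_fn(pred_poses[k], gt_poses[k])  # quality-aware foreground score
    return target
```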
Multi-person pose estimation methods generally follow the top-down or bottom-up paradigms, both of which can be regarded as two-stage approaches, resulting in high computational cost and low efficiency. In this paper, moving towards a compact and efficient pipeline for the multi-person pose estimation task, we propose to represent human parts as points and present a new body representation that leverages an adaptive point set, including the human center and seven human-part points, to represent human instances in a more fine-grained manner. The novel representation is more capable of capturing diverse pose deformations and adaptively decomposes the long-range center-to-joint displacements, thus delivering a single-stage differentiable network that regresses multi-person poses more precisely, termed AdaptivePose. For inference, our proposed network eliminates the grouping as well as the refinement and only requires a single-step disentangling process to form multi-person poses. Without any bells and whistles, we achieve the best speed-accuracy trade-offs of 67.4% AP / 29.4 FPS with DLA-34 and 71.3% AP / 9.1 FPS on the COCO test-dev dataset.
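A hedged sketch of decoding the adaptive point-set representation described above, assuming the long center-to-joint displacement is split into a center-to-part offset plus a part-to-joint offset so joints are regressed relative to nearby part points. Tensor names and the part-to-joint grouping are assumptions, not the network's exact heads.

```python
import torch

def decode_poses(centers, center_to_part, part_to_joint, joint_part_idx):
    """
    centers:        (K, 2) detected human center coordinates
    center_to_part: (K, P, 2) offsets from each center to P adaptive part points
    part_to_joint:  (K, J, 2) offsets from the responsible part point to each joint
    joint_part_idx: (J,) index of the part point each joint is regressed from
    """
    parts = centers[:, None, :] + center_to_part           # (K, P, 2) adaptive part points
    joints = parts[:, joint_part_idx, :] + part_to_joint   # (K, J, 2) final keypoints
    return joints
```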
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments find that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
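A hedged sketch of the kind of multi-relational graph baseline the evaluation above refers to: an R-GCN-style layer that aggregates neighbors separately for each relation type and sums the results. Dataset loading and the exact baselines from the MGTAB repository are not reproduced; shapes, names, and the normalization assumption are illustrative only.

```python
import torch
import torch.nn as nn

class MultiRelationLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_lins = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_relations)])
        self.self_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):
        # x: (N, in_dim) user features; adjs: list of row-normalized (N, N) adjacency
        # matrices, one per relation type (e.g., followers, friends, mentions, ...).
        out = self.self_lin(x)
        for adj, lin in zip(adjs, self.rel_lins):
            out = out + adj @ lin(x)    # per-relation neighbor aggregation
        return torch.relu(out)
```

Stacking two such layers and adding a linear classifier over the node embeddings gives a simple multi-relational account-detection baseline of the sort compared against feature-based models.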
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice mock interviews with each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.